Combining multi-fidelity modelling and asynchronous batch Bayesian Optimization

Authors

Abstract

Bayesian Optimization is a useful tool for experiment design. Unfortunately, the classical, sequential setting of Bayesian Optimization does not translate well into laboratory experiments, for instance battery design, where measurements may come from different sources and their evaluations require significant waiting times. Multi-fidelity Bayesian Optimization addresses the setting with measurements from multiple sources. Asynchronous batch Bayesian Optimization provides a framework to select new experiments before the results of prior experiments are revealed. This paper proposes an algorithm combining multi-fidelity and asynchronous batch methods. We empirically study its behaviour and show that it can outperform single-fidelity batch methods and multi-fidelity sequential methods. As an application, we consider designing electrode materials for optimal performance in pouch cells, using coin cells to approximate battery performance.
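As a rough illustration of the two ingredients the abstract combines, the Python sketch below treats the fidelity level as an extra Gaussian-process input dimension, handles in-flight experiments with the kriging-believer heuristic (pending points are imputed with the posterior mean), and scores candidates by expected improvement per unit cost. This is a minimal sketch under those assumptions, not the paper's algorithm; the function names, kernel, and cost values are all illustrative.

```python
import numpy as np
from scipy.stats import norm


def rbf_kernel(A, B, lengthscale=0.3):
    """Squared-exponential kernel over joint (design, fidelity) inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)


def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and standard deviation at the test inputs Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = rbf_kernel(Xs, X)
    mu = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.clip(1.0 - (v ** 2).sum(axis=0), 1e-12, None)  # k(x, x) = 1 here
    return mu, np.sqrt(var)


def propose(X, y, pending, candidates, fidelity_costs):
    """Select the next (x, fidelity) pair while `pending` jobs still run."""
    # Kriging believer: impute pending experiments with the posterior mean,
    # pushing the next proposal away from points already in the queue.
    if len(pending):
        mu_p, _ = gp_posterior(X, y, pending)
        X, y = np.vstack([X, pending]), np.concatenate([y, mu_p])
    mu, sd = gp_posterior(X, y, candidates)
    # Expected improvement per unit cost: cheap fidelities are favoured
    # unless the expensive one promises a much larger improvement.
    z = (mu - y.max()) / sd
    ei = (mu - y.max()) * norm.cdf(z) + sd * norm.pdf(z)
    costs = np.array([fidelity_costs[int(c[1])] for c in candidates])
    return candidates[np.argmax(ei / costs)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def objective(x, s):
        """Toy target: s=0 is a cheap, noisy proxy; s=1 is the true cell."""
        return -(x - 0.6) ** 2 + (0.05 * rng.standard_normal() if s == 0 else 0.0)

    # Joint inputs (x, s): design variable x in [0, 1], fidelity flag s.
    X = np.array([[0.1, 0.0], [0.5, 1.0], [0.9, 0.0]])
    y = np.array([objective(x, s) for x, s in X])
    pending = np.array([[0.7, 0.0]])  # an experiment still running in the lab
    grid = np.linspace(0.0, 1.0, 101)
    candidates = np.array([[x, s] for x in grid for s in (0.0, 1.0)])
    nxt = propose(X, y, pending, candidates, fidelity_costs={0: 1.0, 1: 5.0})
    print("next experiment (x, fidelity):", nxt)
```

Dividing expected improvement by the fidelity's cost is one common cost-aware heuristic; the paper itself may model cross-fidelity correlation and weigh fidelities differently.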


Related articles

Multi-fidelity optimization via surrogate modelling

This paper demonstrates the application of correlated Gaussian process based approximations to optimization where multiple levels of analysis are available, using an extension to the geostatistical method of co-kriging. An exchange algorithm is used to choose which points of the search space to sample within each level of analysis. The derivation of the co-kriging equations is presented in an i...


Continuous-fidelity Bayesian Optimization

While Bayesian optimization (BO) has achieved great success in optimizing expensive-to-evaluate black-box functions, especially tuning hyperparameters of neural networks, methods such as random search (Li et al., 2016) and multifidelity BO (e.g. Klein et al. (2017)) that exploit cheap approximations, e.g. training on a smaller training set or with fewer iterations, can outperform standard BO a...


Dynamic Batch Bayesian Optimization

Bayesian optimization (BO) algorithms try to optimize an unknown function that is expensive to evaluate using a minimum number of evaluations/experiments. Most of the proposed algorithms in BO are sequential, where only one experiment is selected at each iteration. This method can be time-inefficient when each experiment takes a long time and more than one experiment can be run concurrently. On t...


Hybrid Batch Bayesian Optimization

Bayesian Optimization (BO) aims at optimizing an unknown function that is costly to evaluate. We focus on applications where concurrent function evaluations are possible. In such cases, BO could choose to either sequentially evaluate the function (sequential mode) or evaluate the function at a batch of multiple inputs at once (batch mode). The sequential mode generally leads to better optimizat...



Journal

Journal title: Computers & Chemical Engineering

Year: 2023

ISSN: 1873-4375, 0098-1354

DOI: https://doi.org/10.1016/j.compchemeng.2023.108194